Planning by Prioritized Sweeping with Small Backups
Authors
Abstract
Efficient planning plays a crucial role in model-based reinforcement learning. Traditionally, the main planning operation is a full backup based on the current estimates of the successor states. Consequently, its computation time is proportional to the number of successor states. In this paper, we introduce a new planning backup that uses only the current value of a single successor state and has a computation time independent of the number of successor states. This new backup, which we call a small backup, opens the door to a new class of model-based reinforcement learning methods that exhibit much finer control over their planning process than traditional methods. We empirically demonstrate that this increased flexibility allows for more efficient planning by showing that an implementation of prioritized sweeping based on small backups achieves a substantial performance improvement over classical implementations.
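The abstract's core idea can be illustrated with a minimal tabular sketch: a full backup recomputes an action value by summing over all successor states, while a small backup folds in the value change of a single successor in constant time, which lets prioritized sweeping propagate changes one successor at a time. This is an illustrative reconstruction under assumed data structures (`P`, `R`, `Q`, `V`, `predecessors` are hypothetical names), not the paper's exact algorithm.

```python
import heapq

# Assumed tabular model (illustrative, not the paper's notation):
# P[s][a] maps successor state s' -> transition probability,
# R[s][a] is the expected immediate reward.
gamma = 0.95

def full_backup(Q, V, P, R, s, a):
    # Classical backup: cost grows with the number of successor states.
    Q[s][a] = R[s][a] + gamma * sum(p * V[sp] for sp, p in P[s][a].items())

def small_backup(Q, P, s, a, sp, delta):
    # Small backup: folds in the change `delta` of a single successor's
    # value. O(1), independent of how many successors (s, a) has.
    Q[s][a] += gamma * P[s][a][sp] * delta

def prioritized_sweeping(Q, V, P, R, predecessors, n_backups, theta=1e-6):
    # Seed the queue with each state's current Bellman error.
    pq = []
    for s in V:
        err = abs(max(Q[s].values()) - V[s])
        if err > theta:
            heapq.heappush(pq, (-err, s))
    for _ in range(n_backups):
        if not pq:
            break
        _, sp = heapq.heappop(pq)
        delta = max(Q[sp].values()) - V[sp]
        V[sp] += delta
        # Propagate the change to predecessors using small backups only.
        for (s, a) in predecessors[sp]:
            small_backup(Q, P, s, a, sp, delta)
            err = abs(max(Q[s].values()) - V[s])
            if err > theta:
                heapq.heappush(pq, (-err, s))
    return V
```

Because each propagation step touches only one (predecessor, successor) pair, the sweep can be interrupted after any number of backups, which is the fine-grained control over planning that the abstract refers to.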
Similar Resources
Efficient Planning in MDPs by Small Backups
Efficient planning plays a crucial role in model-based reinforcement learning. Traditionally, the main planning operation is a full backup based on the current estimates of the successor states. Consequently, its computation time is proportional to the number of successor states. In this paper, we introduce a new planning backup that uses only the current value of a single successor state and has...
Generalized Prioritized Sweeping
Prioritized sweeping is a model-based reinforcement learning method that attempts to focus an agent’s limited computational resources to achieve a good estimate of the value of environment states. To choose effectively where to spend a costly planning step, classic prioritized sweeping uses a simple heuristic to focus computation on the states that are likely to have the largest errors. In this...
Efficient Model-Based Deep Reinforcement Learning with Variational State Tabulation
Modern reinforcement learning algorithms reach super-human performance on many board and video games, but they are sample inefficient, i.e. they typically require significantly more playing experience than humans to reach an equal performance level. To improve sample efficiency, an agent may build a model of the environment and use planning methods to update its policy. In this article we intro...
Topological Orders Based Planning for Solving POMDPs
Although partially observable Markov decision processes (POMDPs) have received significant attention in past years, to date, solving problems of realistic order of magnitude remains a serious challenge. In this context, techniques that accelerate fundamental algorithms have been a main focus of research. Among them prioritized solvers suggest solutions to the problem of ordering backup operatio...
Is prioritized sweeping the better episodic control?
Episodic control has been proposed as a third approach to reinforcement learning, besides model-free and model-based control, by analogy with the three types of human memory, i.e. episodic, procedural and semantic memory. But the theoretical properties of episodic control are not well investigated. Here I show that in deterministic tree Markov decision processes, episodic control is equivalent ...